Multi-Task Multi-Sample Learning
Abstract
In the exemplar SVM (E-SVM) approach of Malisiewicz et al., ICCV 2011, an ensemble of SVMs is learnt, with each SVM trained independently using only a single positive sample and all negative samples for the class. In this paper we develop a multi-sample learning (MSL) model that enables joint regularization of the E-SVMs at no additional cost over the original ensemble learning. The advantage of the MSL model is that the degree of sharing between positive samples can be controlled, so that it reproduces the classification performance of either an ensemble of E-SVMs (complete sample independence) or a standard SVM (all positive samples pooled); between these two limits, however, the model can exceed the performance of both. The MSL framework is inspired by multi-task learning approaches. We also introduce a multi-task extension of MSL and develop a multi-task multi-sample learning (MTMSL) model that encourages both sharing between classes and sharing between the sample-specific classifiers within each class. Both MSL and MTMSL have convex objective functions. The MSL and MTMSL models are evaluated on standard benchmarks including the MNIST, ‘Animals with attributes’ and PASCAL VOC 2007 datasets, where they achieve a significant performance improvement over both a standard SVM and an ensemble of E-SVMs.
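The kind of controllable sharing the abstract describes can be sketched with a mean-style decomposition, writing each exemplar classifier as w_i = w0 + v_i and penalizing the shared and sample-specific parts separately (in the spirit of regularized multi-task learning). The toy data, hinge-loss objective, regularization weights and step size below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy one-class problem: 3 positive samples, 20 negatives, in 2-D
X_pos = rng.normal(loc=[2.0, 2.0], size=(3, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], size=(20, 2))

def objective(w0, V, lam, mu):
    """Joint hinge-loss objective: exemplar i uses classifier w0 + V[i]."""
    total = 0.0
    for i in range(len(X_pos)):
        w = w0 + V[i]
        total += max(0.0, 1.0 - X_pos[i] @ w)            # its own positive
        total += np.maximum(0.0, 1.0 + X_neg @ w).sum()  # all negatives
    # lam penalizes the shared part, mu the sample-specific deviations;
    # large mu drives V -> 0 (one standard SVM), small mu decouples the
    # exemplars (independent E-SVM-like classifiers)
    return total + lam * w0 @ w0 + mu * (V * V).sum()

w0 = np.zeros(2)
V = np.zeros((3, 2))
lam, mu, lr = 0.1, 1.0, 0.005
start = objective(w0, V, lam, mu)

# plain subgradient descent on (w0, V)
for _ in range(200):
    g0 = 2.0 * lam * w0
    gV = 2.0 * mu * V
    for i in range(len(X_pos)):
        w = w0 + V[i]
        if 1.0 - X_pos[i] @ w > 0:       # positive margin violated
            g0 -= X_pos[i]
            gV[i] -= X_pos[i]
        active = (1.0 + X_neg @ w) > 0   # negatives inside the margin
        g = X_neg[active].sum(axis=0) if active.any() else np.zeros(2)
        g0 += g
        gV[i] += g
    w0 -= lr * g0
    V -= lr * gV

end = objective(w0, V, lam, mu)
```

Sweeping mu between its two extremes traces out the spectrum the abstract refers to, from fully shared to fully independent classifiers.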
Similar articles
Multi-Objective Multi-Task Learning
This dissertation presents multi-objective multi-task learning, a new learning framework. Given a fixed sequence of tasks, the learned hypothesis space must minimize multiple objectives. Since these objectives are often in conflict, we cannot find a single best solution, so we analyze a set of solutions. We first propose and analyze a new learning principle, empirically efficient learning. From...
Adaptive Multi-task Sparse Learning with an Application to fMRI Study
In this paper, we consider the multi-task sparse learning problem under the assumption that the dimensionality diverges with the sample size. The traditional l1/l2 multi-task lasso does not enjoy the oracle property unless a rather strong condition is enforced. Inspired by adaptive lasso, we propose a multi-stage procedure, adaptive multi-task lasso, to simultaneously conduct model estimation a...
Improving Agent Performance for Multi-Resource Negotiation Using Learning Automata and Case-Based Reasoning
In electronic commerce markets, agents often need to acquire multiple resources to fulfil a high-level task, and must compete with each other to obtain them. In multi-agent environments involving such competition, negotiation is the interaction by which agents reach agreement on resource allocation and coordinate with one another. In recent ...
Sample Complexity of Multi-task Reinforcement Learning
Transferring knowledge across a sequence of reinforcement-learning tasks is challenging, and has a number of important applications. Though there is encouraging empirical evidence that transfer can improve performance in subsequent reinforcement-learning tasks, there has been very little theoretical analysis. In this paper, we introduce a new multi-task algorithm for a sequence of reinforcement...
Revisiting Stein's paradox: multi-task averaging
We present a multi-task learning approach to jointly estimate the means of multiple independent distributions from samples. The proposed multi-task averaging (MTA) algorithm results in a convex combination of the individual tasks’ sample averages. We derive the optimal amount of regularization for the two-task case for the minimum risk estimator and a minimax estimator, and show that the optima...
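The "convex combination of sample averages" that MTA produces can be illustrated by shrinking each task's sample mean toward the pooled mean. The shrinkage weight alpha below is a hand-picked illustrative value, not the optimal closed-form amount the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(1)
# 50 samples from each of three tasks with nearby true means
samples = [rng.normal(m, 1.0, size=50) for m in (0.0, 0.5, 1.0)]

task_means = np.array([s.mean() for s in samples])  # per-task averages
grand_mean = task_means.mean()                      # pooled average

# MTA-style estimate: a convex combination of each task's own average
# and the pooled average (alpha chosen by hand for illustration)
alpha = 0.3
mta = (1.0 - alpha) * task_means + alpha * grand_mean
```

Because the combination is convex, each estimate lies between that task's own sample mean and the pooled mean, which is the Stein-style shrinkage effect the title alludes to.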